The Power of Human by Adam Waytz


Playing to Moral Strengths

In the context of decision making, people expect robots to be utilitarian entities that make choices based on cold cost-benefit calculations. People expect humans, on the other hand, to follow “deontological” moral rules such as “do not actively discriminate against another person.”1 These expectations indeed reflect machines’ and humans’ respective moral strengths. Robots’ moral advantage is their “blindness” to context, which lets them apply the same calculation impartially to every case. However, research shows that people prefer deontological decision makers, those who weigh subjective factors related to harm and injustice, over perfectly utilitarian ones.2

In work explicitly examining people’s aversion to machines making moral decisions, psychologists Yochanan Bigman and Kurt Gray found that people indeed preferred humans over machines as moral decision makers in medical, military, and legal contexts, yet were receptive to machines in an advisory role.3 In one study, participants were asked whether a doctor, a computer system named Healthcomp (capable of rational, statistics-based thinking), or a doctor advised by Healthcomp should decide whether to pursue a risky surgery for a child; a majority favored the doctor advised by the machine. Thus, one way to optimize human-machine partnerships is to let machines inform decisions through intensive utilitarian calculation while humans retain the final say, correcting for any moral violations the calculations would otherwise permit.
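A minimal sketch of that division of labor might look like the following, assuming a toy expected-utility score and an explicit human override. The option names and numbers are invented for illustration; this is not the system from the Bigman and Gray studies.

```python
# Hypothetical sketch: the machine computes a purely utilitarian
# recommendation; a human keeps the final say and may override it
# on deontological grounds.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Option:
    name: str
    survival_rate: float    # probability the patient survives
    quality_of_life: float  # expected quality of life (0.0-1.0) if they do

def machine_advice(options: list[Option]) -> Option:
    """Pure cost-benefit calculation: maximize expected utility."""
    return max(options, key=lambda o: o.survival_rate * o.quality_of_life)

def human_decision(options: list[Option], override: Optional[str] = None) -> Option:
    """The human reviews the machine's advice and may override it,
    e.g., to honor a moral rule that no utility score captures."""
    advice = machine_advice(options)
    if override is not None:
        return next(o for o in options if o.name == override)
    return advice

options = [
    Option("risky surgery", survival_rate=0.6, quality_of_life=0.9),
    Option("conservative care", survival_rate=0.8, quality_of_life=0.5),
]
print(machine_advice(options).name)                                 # machine's pick
print(human_decision(options, override="conservative care").name)  # human's final call
```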

My colleague, sociologist Brian Uzzi, has written about this first prescription while discussing housing rental company Airbnb’s response to racial discrimination complaints.4 Airbnb’s issues have included hosts sending racist messages to potential renters and denying or canceling renters’ plans based on their race, as well as a report showing hosts were less likely to rent to individuals with stereotypically African American-sounding names. Uzzi describes how machine learning algorithms could scour user-generated text on the Airbnb website or on Twitter to identify “red flag” phrases that tend to predict discriminatory behavior by property hosts. Humans may not be able to identify the precise words that predict such behavior, and they certainly do not have the time to comb through terabytes of text-based data looking for red flag terms. As Uzzi describes, however, this algorithmic search is insufficient on its own to prevent large-scale discrimination. Such an effort would also require checking the algorithm’s output against information gathered by vetting property hosts directly, as well as ensuring the algorithm is not developing its own biases (more on this below). The vetting process would involve asking hosts whether they have a history of prejudiced behavior or favor certain races over others, and here human discrimination and diversity experts would need to design surveys and other methodological tools to obtain accurate information directly from hosts. This is also where humans would have to decide on the moral rule by which we judge another person to be racially prejudiced or discriminatory. Machine learning can pinpoint hot-button words that identify people at risk of discriminating, but confirming these suspicions requires a human asking people directly about their behavior and attitudes toward discrimination.
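A rough sketch of the red-flag search Uzzi describes could be built from a simple text classifier: train it on host messages labeled by past outcomes, then surface the phrases that most strongly predict discriminatory behavior for a human expert to review. The messages and labels below are invented placeholders; Airbnb’s actual pipeline is not public.

```python
# Hypothetical red-flag phrase detection via a linear text classifier.
# The highest-weighted n-grams become candidate "red flag" terms that
# a human expert would then vet, as the paragraph above describes.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

messages = [
    "we prefer guests who fit our community",   # invented examples
    "happy to host, check-in is flexible",
    "not comfortable renting to your kind",
    "welcome! the keys are in the lockbox",
]
labels = [1, 0, 1, 0]  # 1 = host later denied/canceled in a discriminatory pattern

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(messages)

model = LogisticRegression()
model.fit(X, labels)

# Rank phrases by learned weight: the top n-grams are the red-flag candidates.
weights = model.coef_[0]
terms = vectorizer.get_feature_names_out()
for weight, term in sorted(zip(weights, terms), reverse=True)[:5]:
    print(f"{term!r}: {weight:+.3f}")
```

Note that the classifier only ranks statistical associations; deciding which flagged phrases actually warrant follow-up vetting remains the human moral judgment the text describes.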

In several cases, machines left unchecked have produced more, not less, race-based and gender-based discrimination.


